Thursday, July 29, 2004

Welcome to the Metropolis Disco - How do you move your particles?


Out on the dance floor tonight, the spotlight is on two papers about improving the efficiency of Monte Carlo moves. But first, a brief summary of Peskun's theorem (both papers invoke it). The Metropolis algorithm proposes a trial move and accepts or rejects it with some probability. If the move is rejected, the current location is counted in the average again. Rejected moves therefore increase the autocorrelation, and so increase the variance of the resulting average. Peskun's theorem validates this intuition, under certain conditions. So the result is that, just like dancing, moving is better than standing still.
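
For concreteness, here's a minimal sketch of the accept/reject step (my own Python toy, using a standard normal as a stand-in target distribution); the thing to notice is that a rejected move simply repeats the current point in the running average, which is where the extra autocorrelation comes from:

    import numpy as np

    rng = np.random.default_rng(1)

    def log_pi(x):
        # Stand-in target: log of an unnormalized standard normal density.
        return -0.5 * x * x

    def metropolis(n_steps, step_size=1.0, x=0.0):
        samples = []
        for _ in range(n_steps):
            trial = x + step_size * rng.uniform(-1.0, 1.0)
            # Accept with probability min(1, pi(trial)/pi(x)).
            if np.log(rng.random()) < log_pi(trial) - log_pi(x):
                x = trial
            # Accepted or not, the current point goes into the average,
            # so a rejection just repeats the previous value.
            samples.append(x)
        return np.array(samples)

    chain = metropolis(100_000)
    print("<x^2> =", np.mean(chain ** 2))   # should be close to 1 for this target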

The first paper is Delayed Rejection Variational Monte Carlo by Bressanini, Morosi, and Tarasco. The problem is that for best efficiency, moves near an atomic nucleus should use a small step size, and moves far from a nucleus should use a large one. If a move is rejected, it is likely that the step size was too large. One could imagine a dynamic or adaptive step size method that lowers the step size when there are lots of rejections and raises it when there are lots of acceptances (*). This paper does something slightly different - when a move is rejected, it is retried with a smaller step, but the original rejected move is not made part of the average. The acceptance probability of the retried move is adjusted to take this into account. So each move takes more time on average, but that cost is more than made up for by the decrease in the autocorrelation time.
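
To make the scheme concrete, here is a rough sketch of a single delayed-rejection step (mine, not the authors' code), assuming a one-dimensional toy target, Gaussian trial moves with widths sigma1 > sigma2, and an independent second-stage proposal; the second-stage acceptance probability is the standard delayed-rejection correction, which is what keeps detailed balance intact:

    import numpy as np

    rng = np.random.default_rng(0)

    def log_pi(x):
        # Toy stand-in for log|psi(x)|^2: an unnormalized standard normal.
        return -0.5 * x * x

    def log_gauss(center, point, sigma):
        # Log density of a Gaussian trial move N(center, sigma^2) at `point`.
        return -0.5 * ((point - center) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

    def log_alpha1(x, y):
        # First-stage Metropolis log-acceptance for a symmetric proposal.
        return min(0.0, log_pi(y) - log_pi(x))

    def log_one_minus(log_a):
        # log(1 - a) given log(a), with a tiny floor as a numerical guard.
        return np.log(max(1e-300, 1.0 - np.exp(log_a)))

    def dr_step(x, sigma1=2.0, sigma2=0.5):
        # Stage 1: the usual Metropolis move with the large step.
        y1 = x + sigma1 * rng.standard_normal()
        la1 = log_alpha1(x, y1)
        if np.log(rng.random()) < la1:
            return y1
        # Stage 2: y1 was rejected, so retry from x with a smaller step.
        # The rejected point y1 never enters the average; instead it enters
        # the acceptance probability below, which preserves detailed balance.
        y2 = x + sigma2 * rng.standard_normal()
        log_num = log_pi(y2) + log_gauss(y2, y1, sigma1) + log_one_minus(log_alpha1(y2, y1))
        log_den = log_pi(x)  + log_gauss(x,  y1, sigma1) + log_one_minus(la1)
        if np.log(rng.random()) < log_num - log_den:
            return y2
        return x   # both stages rejected: the current point is counted again

    x, chain = 0.0, []
    for _ in range(50_000):
        x = dr_step(x)
        chain.append(x)
    print("<x^2> =", np.mean(np.array(chain) ** 2))   # ~1 for this toy target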

The second paper is Optimal Monte Carlo Updating by Pollet, Rombouts, Van Houcke, and Heyde. They work with discrete systems and introduce a procedure called "Metropolizing" - taking a move rule (e.g., heat bath) and transforming it into a new, more efficient transition matrix. The process can be applied repeatedly until only one diagonal element is non-zero, resulting in a very efficient transition matrix (for single-site updating rules).
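
As a toy illustration of the flavor of the result (my own two-state example, not the paper's general construction), here are the heat-bath and Metropolis transition matrices for a single Ising spin in a field: both leave the same distribution stationary, but the Metropolized matrix has only one non-zero diagonal element, i.e. it spends less time standing still:

    import numpy as np

    # A single Ising spin in a field h at inverse temperature beta: two states.
    beta, h = 1.0, 0.3
    pi = np.exp(beta * h * np.array([+1.0, -1.0]))
    pi /= pi.sum()                       # stationary distribution, pi[0] > pi[1] here

    # Heat-bath rule: the new state is drawn from pi regardless of the old state,
    # so both diagonal ("stand still") entries are non-zero.
    heat_bath = np.tile(pi, (2, 1))

    # Metropolized rule: always propose the flip, accept with min(1, pi(new)/pi(old)).
    metro = np.zeros((2, 2))
    for i in range(2):
        j = 1 - i
        metro[i, j] = min(1.0, pi[j] / pi[i])
        metro[i, i] = 1.0 - metro[i, j]

    for name, T in [("heat bath   ", heat_bath), ("metropolized", metro)]:
        print(name, "leaves pi invariant:", np.allclose(pi @ T, pi),
              " diagonal:", np.diag(T))

In this two-state case a single Metropolis step already reaches the one-non-zero-diagonal form; for single-site rules with more states, the repeated application described in the paper is what gets you there.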



(*) I'm not entirely sure a simple dynamic step size method would satisfy detailed balance; if the step size depends on the history of past acceptances and rejections, the transition rule changes as the run goes on, so this should be checked before trusting the results.

Wednesday, July 21, 2004

More Graphic Card Computations



There's a site, GPGPU (General Purpose computation on Graphics Processing Units), dedicated to interesting things one can do with a graphics card. (Well, there's always graphics, but using things for their intended purpose can get boring :)

Last fall, I wondered about the possibility of doing Monte Carlo on the graphics card.

On the GPGPU site there's a link to a paper about doing Ising model and percolation simulations on the graphics card (Benchmarking and Implementation of Probability-Based Simulations on Programmable Graphics Cards). The work is primarily for proof of concept and benchmarking, with an eye towards lattice QCD simulations eventually. Hmm. Play a killer game of Quake *and* uncover fundamental secrets of the universe - what a combination!

I can imagine students requesting the latest "Ising co-processors" so they can "make progress on their thesis" when Doom 3 comes out. (Err, maybe they need to leave off that last part when making the request).